
    Report on the 2nd International Workshop on Transforms in Behavioral and Affective Computing (THECOG 2022) at CIKM 2022

    Human decision making is central in many functions across a broad spectrum of fields including marketing, investments and smart contracts, digital health, political campaigns, logistics, and strategic management, to name only a few. Computational behavioral science, the focus of the second consecutive iteration of the International Workshop on Transforms in Behavioral and Affective Computing (THECOG), held in conjunction with CIKM 2022, not only studies the various psychological, cultural, and social factors that contribute to decision making besides reasoning, but also seeks to construct robust, scalable, and efficient computational models that imitate or extend decision making processes. This year the keynote speech focused on affective robotics and their expected advantages in substantially improving the quality of human life. Moreover, the accepted papers covered a considerable variety of topics, including smart cities, speech emotion recognition, deepfake discovery, and how smart coupons may influence online consumer behavior. For a second consecutive year, THECOG was a central meeting point where new results were presented. Date: 21 October 2022. Website: https://www.cikm2022.org/workshops

    THECOG 2022 - Transforms In Behavioral And Affective Computing (Revisited)

    Human decision making is central in many functions across a broad spectrum of fields such as marketing, investment, smart contract formulation, political campaigns, and organizational strategic management. Behavioral economics studies the psychological, cultural, and social factors that contribute to decision making alongside reasoning. It should be highlighted that behavioral economics does not negate classical economic theory but rather extends it in two distinct directions. First, a finer granularity can be obtained by studying the decision making process not of massive populations but of individuals and groups, using signal estimation or deep learning techniques based on a wide array of attributes ranging from social media posts to physiological signals. Second, time becomes a critical parameter, and changes in the disposition towards alternative decisions can be tracked with input-output or state space models. The primary findings so far are concepts like bounded rationality and perceived risk, while results include optimal strategies for various levels of information awareness and action strategies based on perceived loss aversion principles. It follows that behavioral economics relies on deep learning, signal processing, control theory, social media analysis, affective computing, natural language processing, and gamification, to name only a few fields, and is therefore directly tied to computer science in many ways. THECOG will be a central meeting point for researchers of various backgrounds, aiming to generate new interdisciplinary and groundbreaking results.
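
    A minimal sketch of the second direction mentioned above, assuming a hypothetical linear state space model in which a latent disposition vector over two alternative decisions evolves over time and is observed through noisy behavioral signals. The matrices, noise levels, and observations are illustrative and not taken from any workshop paper.

```python
import numpy as np

# Hypothetical linear state space model: the latent disposition x_t over two
# alternatives evolves as x_t = A x_{t-1} + w_t and is observed through noisy
# behavioral measurements y_t = C x_t + v_t.
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])          # slow drift between the two alternatives
C = np.eye(2)                         # disposition observed directly, with noise
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)

def kalman_step(x, P, y):
    """One predict/update step tracking the disposition estimate x."""
    x_pred = A @ x                                   # predict
    P_pred = A @ P @ A.T + Q
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_new = x_pred + K @ (y - C @ x_pred)            # update with observation y
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

# Toy usage: track how the preference for alternative 0 versus 1 shifts.
x, P = np.array([0.5, 0.5]), np.eye(2)
for y in [np.array([0.6, 0.4]), np.array([0.7, 0.3]), np.array([0.65, 0.35])]:
    x, P = kalman_step(x, P, y)
print(x)  # current disposition estimate over the two alternatives
```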

    THECOG - Transforms in Behavioral and Affective Computing

    Human decision making is central in many functions across a broad spectrum of fields such as marketing, investment, smart contract formulation, political campaigns, and organizational strategic management. Behavioral economics studies the psychological, cultural, and social factors that contribute to decision making alongside reasoning. It should be highlighted that behavioral economics does not negate classical economic theory but rather extends it in two distinct directions. First, a finer granularity can be obtained by studying the decision making process not of massive populations but of individuals and groups, using signal estimation or deep learning techniques based on a wide array of attributes ranging from social media posts to physiological signals. Second, time becomes a critical parameter, and changes in the disposition towards alternative decisions can be tracked with input-output or state space models. The primary findings so far are concepts like bounded rationality and perceived risk, while results include optimal strategies for various levels of information awareness and action strategies based on perceived loss aversion principles. It follows that behavioral economics relies on deep learning, signal processing, control theory, social media analysis, affective computing, natural language processing, and gamification, to name only a few fields, and is therefore directly tied to computer science in many ways. THECOG will be a central meeting point for researchers of various backgrounds, aiming to generate new interdisciplinary and groundbreaking results.

    Proof systems in blockchains: A survey

    © 2019 IEEE. Blockchain is a prime example of disruptive technology on multiple levels. With the advent of blockchains, the need for a mutually trusted third party acting as an intermediary between agents that do not necessarily trust each other becomes obsolete in transactions of any kind, including political or shareholder voting, crowdfunding, financial deals, logistics and supply chain management, and contract formulation. An integral part of the blockchain stack is the proof system, namely the mechanism that efficiently verifies the claims of various blockchain stakeholders. Thus, trust is effectively established in a literally trustless environment by purely computational means. This is especially critical in the digital formulation of smart contracts, where clauses are to be strictly upheld by intelligent agents. The most prominent proof systems recently proposed in the scientific literature are reviewed. Additionally, the applications of blockchain technology to smart contracts are discussed. The latter allow clause re-negotiation, thus increasing the flexibility of transactions. As a concrete example, a simple smart contract written in Solidity, a high level language for the Ethereum Virtual Machine, is presented.
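
    The survey's concrete example is a Solidity contract, which is not reproduced here. As a minimal stand-in for the idea of a proof system verifying a stakeholder's claim by purely computational means, the sketch below shows the verification side of hash-based proof of work, where a claimed nonce is checked against a difficulty target. The function name, header contents, and difficulty value are illustrative assumptions.

```python
import hashlib

def verify_pow(block_header: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Check a proof-of-work claim: the hash of header+nonce must fall
    below a target determined by the claimed difficulty."""
    digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
    value = int.from_bytes(digest, "big")
    target = 1 << (256 - difficulty_bits)   # smaller target = harder puzzle
    return value < target

# Toy usage: a miner would search nonces until verify_pow succeeds;
# any verifier only needs this single cheap check.
header = b"previous-hash|merkle-root|timestamp"
print(verify_pow(header, nonce=123456, difficulty_bits=8))
```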

    Approximate High Dimensional Graph Mining With Matrix Polar Factorization: A Twitter Application

    At the dawn of the Internet era, graph analytics plays an important role in high- and low-level network policymaking across a wide array of fields as diverse as transportation network design, supply chain engineering and logistics, social media analysis, and computer communication networks, to name just a few. Such analytics are computationally challenging, which can be attributed not only to the size of the original graph but also to the nature of the problem parameters. For instance, algorithmic solutions depend heavily on the choice of approximation criterion. Moreover, iterative or heuristic solutions are often sought, as this is a high dimensional problem given the large number of vertices and edges involved as well as their complex interaction. Replacing a directed graph, under constraints, with an undirected one having the same vertex set is often desired in applications such as data visualization, community structure discovery, and connection-based vertex centrality metrics. Polar decomposition is a key matrix factorization which represents a matrix as the product of a symmetric positive (semi)definite factor and an orthogonal one. The former can serve as an undirected approximation of the original adjacency matrix. The proposed graph approximation has been tested on three Twitter graphs with encouraging results with respect to density, Fiedler number, and certain vertex centrality metrics based on matrix power series. The dataset was hosted in an online MongoDB instance.
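
    A minimal sketch of the core idea, assuming the adjacency matrix fits in memory: scipy's polar factorization yields a symmetric positive semidefinite factor that can stand in as an undirected approximation of a directed graph's adjacency matrix. The example graph and the sparsification threshold are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import polar

# Adjacency matrix of a small directed graph (rows = source, cols = target).
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.]])

# Right polar decomposition A = U @ P, with U orthogonal and P symmetric
# positive semidefinite; P acts as the undirected approximation.
U, P = polar(A, side="right")

# P is symmetric by construction, so it defines an undirected weighted graph.
# Optionally threshold small weights to obtain a sparse undirected adjacency.
undirected = np.where(P > 0.1, P, 0.0)   # threshold chosen for illustration
print(np.allclose(P, P.T))               # True: symmetric factor
print(undirected)
```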

    Transform-based graph topology similarity metrics

    Graph signal processing has recently emerged as a field with applications across a broad spectrum of domains including brain connectivity networks, logistics and supply chains, social media, computational aesthetics, and transportation networks. In this paradigm, signal processing methodologies are applied to the adjacency matrix, seen as a two-dimensional signal. Fundamental operations of this type include graph sampling, the graph Laplace transform, and graph spectrum estimation. In this context, topology similarity metrics allow meaningful and efficient comparisons between pairs of graphs or along evolving graph sequences. In turn, such metrics can be the algorithmic cornerstone of graph clustering schemes. Major advantages of relying on existing signal processing kernels include parallelism, scalability, and numerical stability. This work presents a scheme for training a tensor stack network to estimate the topological correlation coefficient between two graph adjacency matrices compressed with the two-dimensional discrete cosine transform, thus augmenting the indirect decompression with knowledge stored in the network. The results from three benchmark graph sequences are encouraging in terms of mean square error and complexity, especially for graph sequences. An additional key point is the independence of the proposed method from the underlying domain semantics. This is primarily achieved by focusing on higher-order structural graph patterns.
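
    A minimal sketch of the compression step, assuming two adjacency matrices of equal size: each is compressed with a two-dimensional DCT and only a low-frequency block of coefficients is kept as a topology descriptor. The correlation is computed directly here for illustration; the paper instead trains a tensor stack network on such compressed representations, which is not reproduced. Function names and the block size k are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(adj: np.ndarray, k: int = 8) -> np.ndarray:
    """2-D DCT of an adjacency matrix, keeping only the k x k
    low-frequency block as a compressed topology descriptor."""
    coeffs = dctn(adj, norm="ortho")
    return coeffs[:k, :k].ravel()

def compressed_correlation(a1: np.ndarray, a2: np.ndarray, k: int = 8) -> float:
    """Pearson correlation between the compressed descriptors of two graphs."""
    f1, f2 = dct_features(a1, k), dct_features(a2, k)
    return float(np.corrcoef(f1, f2)[0, 1])

# Toy usage with two random undirected graphs on the same vertex set.
rng = np.random.default_rng(0)
A = (rng.random((64, 64)) < 0.1).astype(float)
A = np.triu(A, 1); A += A.T
B = A.copy(); B[0, 1] = B[1, 0] = 1.0    # slight perturbation of A
print(compressed_correlation(A, B))
```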

    Building Trusted Startup Teams from LinkedIn Attributes: A Higher Order Probabilistic Analysis

    © 2020 IEEE. Startups arguably contribute to the current business landscape by developing innovative products and services. The discovery of business partners and employees with a specific, verifiable background repeatedly stands out as a prime obstacle. LinkedIn is a popular platform where professional milestones, endorsements, recommendations, and skills are posted. A graph search algorithm with a BFS and a DFS strategy for seeking trusted candidates on LinkedIn is proposed. Both strategies rely on a metric for assessing the trustworthiness of an account according to its LinkedIn attributes. In addition, a stochastic vertex selection mechanism reminiscent of preferential attachment guides the search. Both strategies were evaluated against a large segment of the vibrant startup ecosystem of Patras, Hellas. A higher order probabilistic analysis suggests that BFS is more suitable. The findings also imply that emphasis should be given to local networking events, peer interaction, and tasks allowing verifiable credit for the respective work.
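
    A minimal sketch of the BFS strategy under stated assumptions: the trust metric is a placeholder weighted count of profile attributes, and the stochastic selection expands frontier vertices with probability related to their degree, in the spirit of preferential attachment. Attribute names, weights, and thresholds are illustrative, not the paper's.

```python
import random
from collections import deque

def trust_score(profile: dict) -> float:
    # Placeholder trust metric: weighted count of verifiable attributes.
    return (0.4 * len(profile.get("endorsements", []))
            + 0.4 * len(profile.get("recommendations", []))
            + 0.2 * len(profile.get("skills", [])))

def bfs_trusted(graph: dict, profiles: dict, start: str,
                threshold: float = 2.0, max_visits: int = 100) -> list:
    """BFS from `start`, stochastically expanding neighbours with probability
    related to their degree and collecting accounts whose trust score
    meets `threshold`."""
    visited, trusted = {start}, []
    frontier = deque([start])
    while frontier and len(visited) <= max_visits:
        v = frontier.popleft()
        if trust_score(profiles[v]) >= threshold:
            trusted.append(v)
        neighbours = [u for u in graph[v] if u not in visited]
        if not neighbours:
            continue
        max_deg = max(len(graph[u]) for u in neighbours)
        for u in neighbours:
            # Degree-weighted stochastic selection of which neighbours to expand.
            if random.random() < len(graph[u]) / (1 + max_deg):
                visited.add(u)
                frontier.append(u)
    return trusted

# Toy usage on a tiny graph; in practice graph/profiles come from LinkedIn data.
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
profiles = {v: {"endorsements": ["e"] * i, "recommendations": [], "skills": ["s"]}
            for i, v in enumerate(graph)}
print(bfs_trusted(graph, profiles, "a", threshold=0.5))
```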

    Higher Order Trust Ranking of LinkedIn Accounts with Iterative Matrix Methods

    Trust is a fundamental sociotechnological mainstay of the Web today. There is substantial evidence for this, since netizens implicitly or explicitly agree to trust virtually every Web service they use, ranging from Web-based mail to e-commerce portals. Moreover, the methodological framework for trusting individual netizens, primarily their identity and communications, has progressed considerably. Nevertheless, the core of fact checking for human generated content is still far from being substantially automated, as most proposed smart algorithms inadequately capture fundamental human traits. One such case is the evaluation of the profile trustworthiness of LinkedIn members based on publicly available attributes from the platform itself. A trusted profile may indirectly indicate a more suitable candidate, since its contents can be easily verified. In this article, a first order graph search mechanism for discovering trusted LinkedIn profiles based on a random walker is extended to higher order ranking based on a combination of functional and connectivity patterns. Results are derived for the same benchmark dataset, and the first- and higher-order approaches are compared in terms of accuracy.
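
    A minimal sketch of the first order, random-walker baseline under stated assumptions: a PageRank-style power iteration over a column-stochastic transition matrix, with the teleport distribution biased by per-profile trust scores. The damping factor and the trust bias are illustrative; the paper's higher order combination of functional and connectivity patterns is not reproduced.

```python
import numpy as np

def random_walk_rank(adj: np.ndarray, trust: np.ndarray,
                     damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Power iteration of a trust-biased random walk over the profile graph.
    adj[i, j] = 1 if profile j links to profile i; `trust` biases teleports."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0             # avoid division by zero for sinks
    M = adj / col_sums                         # column-stochastic transition matrix
    bias = trust / trust.sum()                 # teleport distribution from trust scores
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = damping * (M @ r) + (1 - damping) * bias
    return r

# Toy usage: four profiles with uniform trust; a higher value means a more
# central, and in this reading more trustworthy, position in the graph.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [0, 1, 0, 0],
                [1, 0, 1, 0]], dtype=float)
print(random_walk_rank(adj, trust=np.ones(4)))
```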

    Simulating Blockchain Consensus Protocols in Julia: Proof of Work vs Proof of Stake

    Consensus protocols constitute an important part of virtually any blockchain stack, as they safeguard transaction validity and uniqueness. This task is achieved in a distributed manner by delegating it to certain nodes which, depending on the protocol, may further utilize the computational resources of other nodes. As a tangible incentive for nodes to verify transactions, many protocols contain special reward mechanisms. These are typically inducement prizes aimed at increasing node engagement and thus blockchain stability. This work presents the fundamentals of a probabilistic blockchain simulation tool for studying large transaction volumes over time. Two consensus protocols, proof of work and delegated proof of stake, are compared on the basis of the reward distribution and the probability bound of the reward exceeding its expected value. Also, the reward probability as a function of the network distance from the node initiating the transaction is studied.
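
    A minimal Monte Carlo sketch, assuming simplified reward rules: in proof of work a block reward goes to a node with probability proportional to its hash power, while in delegated proof of stake the reward rotates among a fixed set of elected delegates. Node counts, stakes, and the reward amount are illustrative, and this Python sketch does not reproduce the paper's Julia simulator.

```python
import numpy as np

rng = np.random.default_rng(42)
N_NODES, N_BLOCKS, REWARD = 20, 10_000, 1.0

# Proof of work: reward probability proportional to each node's hash power.
hash_power = rng.random(N_NODES)
pow_winners = rng.choice(N_NODES, size=N_BLOCKS, p=hash_power / hash_power.sum())
pow_rewards = np.bincount(pow_winners, minlength=N_NODES) * REWARD

# Delegated proof of stake: the top-k stakeholders are elected delegates and
# forge blocks in round-robin order.
stake = rng.random(N_NODES)
delegates = np.argsort(stake)[-5:]                     # 5 elected delegates
dpos_winners = delegates[np.arange(N_BLOCKS) % len(delegates)]
dpos_rewards = np.bincount(dpos_winners, minlength=N_NODES) * REWARD

# Compare the empirical reward distributions, e.g. how many nodes end up with
# a total reward above the expected value under a uniform split.
expected = N_BLOCKS * REWARD / N_NODES
print("PoW nodes above expectation: ", np.sum(pow_rewards > expected))
print("DPoS nodes above expectation:", np.sum(dpos_rewards > expected))
```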

    Discovering Influential Twitter Authors Via Clustering And Ranking On Apache Storm

    Nowadays, several million people are active on social media throughout the day, while hundreds of new accounts are created daily. Thousands of short posts, or tweets, are published on Twitter, a popular micro-blogging platform, by a wide variety of authors, creating widely diverse social content. This diversity is not only a remarkable strength but also a source of difficulty when attempting to find Twitter's authoritative and influential authors. This work introduces a two-step algorithmic approach for discovering these authors. First, a set of metrics and features is extracted from the social network (e.g., friends and followers) and from the content of the tweets written by each author. Then, Twitter's most authoritative authors are discovered by employing two distinct approaches, one relying on probabilistic clustering and the other on fuzzy clustering. In particular, the former employs a Gaussian Mixture Model to identify the most authoritative authors and then introduces a novel ranking technique which relies on computing the cumulative Gaussian distribution of the extracted metrics and features. The latter combines the Gaussian Mixture Model with fuzzy c-means, and the derived authors are subsequently ranked via the Borda count technique. The results indicate that the second scheme was able to find more authoritative authors in the benchmark dataset. Both approaches were designed, implemented, and executed on a local cluster of the Apache Storm framework, a cloud-based platform which supports streaming data and real-time scenarios.
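
    A minimal sketch of the probabilistic variant under stated assumptions: author feature vectors are clustered with a Gaussian Mixture Model, and candidates from the "authoritative" cluster are ranked by the product of per-feature cumulative normal probabilities, a simplified reading of the cumulative-Gaussian ranking. Feature semantics, the number of components, and the ranking rule details are assumptions; the fuzzy c-means/Borda count variant and the Apache Storm deployment are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Toy author feature matrix: columns such as followers, friends, retweet rate.
rng = np.random.default_rng(1)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(200, 3))

# Step 1: probabilistic clustering of authors with a Gaussian Mixture Model.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)
# Treat the cluster with the highest mean feature values as "authoritative".
top_cluster = np.argmax(gmm.means_.sum(axis=1))
candidates = np.where(labels == top_cluster)[0]

# Step 2: rank candidates by the product of per-feature cumulative Gaussian
# probabilities, so authors unusually high on all features rank first.
mu, sigma = X.mean(axis=0), X.std(axis=0)
scores = norm.cdf((X[candidates] - mu) / sigma).prod(axis=1)
ranking = candidates[np.argsort(scores)[::-1]]
print(ranking[:10])   # indices of the ten most authoritative authors
```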